Point clouds are one of the most widely used representations for storing 3D geometric data. Several methods for processing point clouds have been proposed in the past; approaches such as PointNet and FoldingNet have shown promising results on tasks like 3D shape classification and segmentation. This work proposes a tree-structured autoencoder framework that exploits hierarchical information through graph convolutions to generate robust embeddings of point clouds. We perform multiple experiments to assess the quality of the embeddings produced by the proposed encoder architecture and visualize t-SNE maps to highlight its ability to distinguish different object classes. We further demonstrate the applicability of the proposed framework to two applications: 3D point cloud completion and single-image-based 3D reconstruction.
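As a rough illustration of the graph-convolution building block such an encoder could use, here is an EdgeConv-style layer over a kNN graph followed by max-pooling into a global embedding; the flat structure, layer sizes, and neighborhood size are illustrative assumptions, not the paper's tree-structured architecture.

```python
import torch
import torch.nn as nn

class SimpleGraphConvEncoder(nn.Module):
    """Toy point-cloud encoder: one EdgeConv-style graph convolution over a
    kNN graph, max-pooled into a global embedding. The paper's encoder is
    tree-structured and hierarchical; this flat sketch only illustrates the
    graph-convolution building block."""
    def __init__(self, k=16, emb_dim=128):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, pts):                                   # pts: (B, N, 3)
        dists = torch.cdist(pts, pts)                         # (B, N, N)
        idx = dists.topk(self.k, largest=False).indices       # (B, N, k)
        nbrs = torch.gather(
            pts.unsqueeze(1).expand(-1, pts.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, 3))          # (B, N, k, 3)
        # Edge features: center point concatenated with relative offsets.
        edge = torch.cat([pts.unsqueeze(2).expand_as(nbrs), nbrs - pts.unsqueeze(2)], -1)
        feats = self.mlp(edge).max(dim=2).values              # per-point features
        return feats.max(dim=1).values                        # (B, emb_dim) embedding

encoder = SimpleGraphConvEncoder()
print(encoder(torch.randn(2, 256, 3)).shape)                  # torch.Size([2, 128])
```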
Handwritten document image binarization is challenging due to the diverse written content and complex background attributes such as page style, paper quality, stains, shadow gradients, and non-uniform illumination. While traditional thresholding methods do not generalize well to such challenging real-world scenarios, deep-learning-based methods perform relatively well when sufficient training data is available. However, existing datasets are limited in size and diversity. This work presents LS-HDIB, a large-scale handwritten document image binarization dataset containing a million document images that span numerous real-world scenarios. In addition, we introduce a novel technique that combines adaptive thresholding and seamless cloning to create the dataset with accurate ground truth. Through extensive quantitative and qualitative evaluation across eight different deep-learning-based models, we demonstrate improved performance when these models are trained on the LS-HDIB dataset and tested on unseen images.
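A minimal OpenCV sketch of the adaptive-thresholding-plus-seamless-cloning combination described above; the file names, threshold parameters, and blending mode are illustrative assumptions rather than the authors' exact pipeline.

```python
import cv2
import numpy as np

# Assumed inputs: a clean handwritten page and a textured background image
# (placeholder file names; the page is assumed to fit within the background).
handwriting = cv2.imread("clean_page.png", cv2.IMREAD_GRAYSCALE)
background = cv2.imread("textured_background.png")

# Adaptive thresholding yields the binary ground-truth mask for the text.
ground_truth = cv2.adaptiveThreshold(
    handwriting, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,
    blockSize=25, C=10)

# Seamless cloning blends the handwriting into the complex background,
# producing a realistic degraded input image for the same ground truth.
src = cv2.cvtColor(handwriting, cv2.COLOR_GRAY2BGR)
mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)
center = (background.shape[1] // 2, background.shape[0] // 2)
degraded = cv2.seamlessClone(src, background, mask, center, cv2.MIXED_CLONE)

cv2.imwrite("input.png", degraded)
cv2.imwrite("gt.png", ground_truth)
```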
We present NusaCrowd, a collaborative initiative to collect and unite existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 117 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their effectiveness has been demonstrated in multiple experiments. NusaCrowd's data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and its local languages. Furthermore, NusaCrowd enables the first multilingual automatic speech recognition benchmark in Indonesian and its local languages. Our work is intended to help advance natural language processing research in under-represented languages.
Cloud computing holds the promise of reduced costs through economies of scale. To realize this promise, cloud computing vendors typically solve sequential resource allocation problems, where customer workloads are packed on shared hardware. Virtual machines (VMs) form the foundation of modern cloud computing as they help logically abstract user compute from shared physical infrastructure. Traditionally, VM packing problems are solved by predicting demand, followed by a Model Predictive Control (MPC) optimization over a future horizon. We introduce an approximate formulation of an industrial VM packing problem as an MILP with soft constraints parameterized by the predictions. Recently, predict-and-optimize (PnO) was proposed for end-to-end training of prediction models by back-propagating the cost of decisions through the optimization problem. However, PnO is unable to scale to the large prediction horizons prevalent in cloud computing. To tackle this issue, we propose the Predict-and-Critic (PnC) framework that outperforms PnO with just a two-step horizon by leveraging reinforcement learning. PnC jointly trains a prediction model and a terminal Q function that approximates cost-to-go over a long horizon, by back-propagating the cost of decisions through the optimization problem \emph{and from the future}. The terminal Q function allows us to solve a much smaller two-step horizon optimization problem than the multi-step horizon necessary in PnO. We evaluate PnO and the PnC framework on two datasets, three workloads, and with disturbances not modeled in the optimization problem. We find that PnC significantly improves decision quality over PnO, even when the optimization problem is not a perfect representation of reality. We also find that hardening the soft constraints of the MILP and back-propagating through the constraints improves decision quality for both PnO and PnC.
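To make the two-step-horizon-plus-terminal-Q idea concrete, the toy sketch below enumerates a tiny discrete action space; in the actual framework the decision problem is an MILP with soft constraints solved by a solver and the terminal Q function is a learned network trained jointly with the predictor, so every function and constant here is a simplified stand-in.

```python
def step_cost(hosts_on, demand, host_cost=1.0, violation_penalty=10.0):
    # Soft-constraint-style stage cost: pay for hosts, heavily penalize unmet demand.
    unmet = max(0.0, demand - hosts_on)
    return host_cost * hosts_on + violation_penalty * unmet

def terminal_q(hosts_on, last_demand):
    # Stand-in for the learned terminal Q function approximating cost-to-go
    # beyond the two-step horizon.
    return 5.0 * abs(hosts_on - last_demand)

predicted_demand = [3.2, 4.1]      # predictor output for the next two steps (toy values)
candidates = range(0, 9)           # feasible host counts (toy action space)

# Two-step horizon: stage costs for both steps plus the terminal Q estimate.
best = min(
    ((a0, a1,
      step_cost(a0, predicted_demand[0])
      + step_cost(a1, predicted_demand[1])
      + terminal_q(a1, predicted_demand[1]))
     for a0 in candidates for a1 in candidates),
    key=lambda t: t[2])
print("chosen allocations:", best[:2], "objective:", round(best[2], 2))
```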
Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. This is particularly useful in modeling medical systems where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences. Such knowledge is often available or can often be distilled into a (possibly black-box) model $M$; for instance, the unicycle model for an F1 racing car. In this light, we consider the following problem: given a model $M$ and a state transition dataset, we wish to best approximate the system model while remaining within a bounded distance of $M$. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples, called memories, using the idea of a growing neural gas. Next, using these memories we partition the state space into disjoint subsets and compute bounds that the neural network must respect when its input is drawn from a particular subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this leads only to a bounded increase in approximation error, which can be controlled by increasing the number of memories. We experimentally show that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to specified $M$ models (each encoding various constraints) with order-of-magnitude improvements compared to the augmented Lagrangian and vanilla training methods.
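One way to read the "symbolic wrapper" is as an output projection: when the input falls in a given partition cell, the network's prediction is clamped into an interval derived from $M$. The toy NumPy sketch below illustrates that reading; the 1-D partition, bound width, and stand-in models are assumptions for illustration, not the paper's growing-neural-gas construction.

```python
import numpy as np

def reference_model(x):
    # Stand-in for the (possibly black-box) prior model M.
    return np.sin(x)

def neural_net(x):
    # Stand-in for an unconstrained learned predictor.
    return np.sin(x) + 0.5 * np.random.randn(*np.shape(x))

# Partition the 1-D state space into disjoint cells; in the paper the
# partition is induced by memories found with a growing neural gas.
cell_edges = np.linspace(-np.pi, np.pi, 9)
epsilon = 0.2  # allowed deviation from M inside each cell

# Precompute per-cell bounds from M by sampling each cell densely.
bounds = []
for lo, hi in zip(cell_edges[:-1], cell_edges[1:]):
    m_vals = reference_model(np.linspace(lo, hi, 50))
    bounds.append((m_vals.min() - epsilon, m_vals.max() + epsilon))

def conformant_predict(x):
    """Clamp the network output into the bound of the cell containing x."""
    idx = np.clip(np.digitize(x, cell_edges) - 1, 0, len(bounds) - 1)
    lower = np.array([bounds[i][0] for i in np.atleast_1d(idx)])
    upper = np.array([bounds[i][1] for i in np.atleast_1d(idx)])
    return np.clip(neural_net(x), lower, upper)

print(conformant_predict(np.linspace(-np.pi, np.pi, 5)))
```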
This paper studies audio-visual noise suppression for egocentric videos -- where the speaker is not captured in the video. Instead, potential noise sources are visible on screen with the camera emulating the off-screen speaker's view of the outside world. This setting is different from prior work in audio-visual speech enhancement that relies on lip and facial visuals. In this paper, we first demonstrate that egocentric visual information is helpful for noise suppression. We compare object recognition and action classification based visual feature extractors, and investigate methods to align audio and visual representations. Then, we examine different fusion strategies for the aligned features, and locations within the noise suppression model to incorporate visual information. Experiments demonstrate that visual features are most helpful when used to generate additive correction masks. Finally, in order to ensure that the visual features are discriminative with respect to different noise types, we introduce a multi-task learning framework that jointly optimizes audio-visual noise suppression and video-based acoustic event detection. This proposed multi-task framework outperforms the audio-only baseline on all metrics, including a 0.16 PESQ improvement. Extensive ablations reveal the improved performance of the proposed model with multiple active distractors, over all noise types and across different SNRs.
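The finding that visual features help most as additive correction masks can be sketched as a small fusion module in which the audio branch predicts a spectral mask and the visual branch predicts an additive correction to it; the tensor shapes and layer sizes below are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class AdditiveMaskFusion(nn.Module):
    """Audio branch predicts a mask; visual branch adds a correction to it."""
    def __init__(self, n_freq=257, visual_dim=512):
        super().__init__()
        self.audio_mask = nn.Sequential(nn.Linear(n_freq, n_freq), nn.Sigmoid())
        self.visual_correction = nn.Linear(visual_dim, n_freq)

    def forward(self, noisy_spec, visual_feat):
        mask = self.audio_mask(noisy_spec)
        correction = self.visual_correction(visual_feat)   # additive visual term
        mask = torch.clamp(mask + correction, 0.0, 1.0)
        return mask * noisy_spec                           # enhanced magnitude

# Toy usage: batch of 4 frames, 257 frequency bins, 512-d visual features.
model = AdditiveMaskFusion()
enhanced = model(torch.rand(4, 257), torch.randn(4, 512))
print(enhanced.shape)  # torch.Size([4, 257])
```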
Hate speech classification has been a long-standing problem in natural language processing. However, even though there are numerous hate speech detection methods, they often overlook a great deal of hate speech because it is implicit in nature. Developing datasets to aid the task of implicit hate speech classification comes with its own challenges: linguistic nuance, evolving definitions of what constitutes hate speech, and a labor-intensive annotation process. This leads to a scarcity of data available to train and test such systems, which gives rise to high-variance problems when heavily parameterized transformer-based models are used to address the task. In this paper, we explore various optimization and regularization techniques and develop a novel RoBERTa-based model that achieves state-of-the-art performance.
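Since the model is RoBERTa-based, a minimal fine-tuning skeleton with Hugging Face Transformers looks roughly as follows; the label set, hyperparameters, and the weight-decay regularization shown are illustrative assumptions, not the paper's exact recipe.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)  # implicit hate vs. not (assumed binary setup)

texts = ["example post one", "example post two"]   # placeholder data
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Weight decay and a small learning rate are typical regularization choices
# for high-variance fine-tuning; the paper explores several such techniques.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss computed internally
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```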
Inferring linear relationships is central to many empirical investigations. A measure of linear dependence should correctly assess the strength of the relationship and qualify for meaningful application to the population. Pearson's correlation coefficient (PCC), the \textit{de facto} measure of bivariate relationships, falls short on both counts. The estimated strength $r$ may be wrong due to limited sample size and non-normality of the data. In the context of statistical significance testing, misinterpreting the $p$-value as a posterior probability leads to Type I errors -- a general problem with significance testing that extends to the PCC. Such errors are exacerbated when multiple hypotheses are tested simultaneously. To address these issues, we propose a machine-learning-based predictive data calibration method that essentially conditions the data on the expected linear relationship. Computing the PCC on the calibrated data yields a calibrated $p$-value that can be interpreted as a posterior probability, together with a calibrated $r$ estimate, a desired outcome that other methods do not provide. Moreover, the ensuing independent interpretation of each test may eliminate the need for multiple-testing correction. We provide empirical evidence favoring the proposed method using several simulations and an application to real-world data.
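For reference, the uncalibrated quantities the paper sets out to fix come straight from SciPy; the sketch below only computes the standard $r$ and $p$ and notes where the proposed calibration step would slot in, since the calibration itself is the paper's contribution.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
x = rng.normal(size=80)
y = 0.4 * x + rng.standard_t(df=3, size=80)   # heavy-tailed, non-normal noise

# Standard PCC: with a small, non-normal sample the estimate r can be off,
# and the p-value is not a posterior probability of a linear relationship.
r, p = pearsonr(x, y)
print(f"raw PCC: r = {r:.3f}, p = {p:.3g}")

# The proposed method would first pass (x, y) through an ML-based predictive
# calibration step and then recompute pearsonr on the calibrated data,
# yielding a calibrated r and a p-value interpretable as a posterior.
```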
Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution. This hinders their deployment in safety-critical applications such as autonomous vehicles and healthcare. Detecting a shift from the training distribution for individual data points has attracted attention, and many techniques have been proposed for out-of-distribution (OOD) detection. But in many applications, the inputs to a machine learning model form a temporal sequence. Existing techniques for OOD detection in time-series data either do not exploit temporal relationships in the sequence or do not provide any detection guarantees. We propose using deviation from in-distribution temporal equivariance as the non-conformity measure in a conformal anomaly detection framework for OOD detection in time-series data. This leads to the proposed detector, CODiT, with guarantees on false detection in time-series data. We illustrate the efficacy of CODiT by achieving state-of-the-art results on computer vision datasets in autonomous driving. We also show that CODiT can be used for OOD detection on non-vision datasets by performing experiments on a physiological gait sensory dataset. Code, data, and trained models are available at https://github.com/kaustubhsridhar/time-series-ood.
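The conformal machinery behind the false-detection guarantee is generic: score a calibration set of in-distribution windows with the non-conformity measure, then convert a test score into a p-value by ranking. The sketch below uses a placeholder score (sample variance); CODiT's actual measure is the deviation from learned temporal equivariance.

```python
import numpy as np

def nonconformity(window):
    # Placeholder score (sample variance); CODiT instead measures how far a
    # model deviates from equivariance to temporal transformations of the window.
    return float(np.var(window))

rng = np.random.default_rng(0)

# Non-conformity scores on a calibration set of in-distribution windows.
calibration_windows = [rng.normal(size=50) for _ in range(200)]
cal_scores = np.array([nonconformity(w) for w in calibration_windows])

def conformal_p_value(test_window):
    """Fraction of calibration scores at least as extreme as the test score."""
    s = nonconformity(test_window)
    return (1 + np.sum(cal_scores >= s)) / (len(cal_scores) + 1)

in_dist = rng.normal(size=50)
ood = 3.0 * rng.normal(size=50)       # wider distribution than the calibration data
print(conformal_p_value(in_dist), conformal_p_value(ood))
# A small p-value flags the window as OOD; the conformal construction
# bounds the false-detection rate on in-distribution data.
```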
Deep learning (DL)-based downscaling has become a popular tool in the earth sciences. DL approaches are increasingly being adopted to downscale coarse-resolution precipitation data and produce more accurate and reliable estimates at local (~few km or even smaller) scales. Although several studies have employed dynamical or statistical downscaling of precipitation, their accuracy is limited by the availability of ground truth. A key challenge in measuring the accuracy of such methods is comparing the downscaled data to point-scale observations, which are often unavailable at such small scales. In this work, we carry out DL-based downscaling to estimate the local precipitation data of the India Meteorological Department (IMD), which was created by approximating the values from station locations to grid points. To test the efficacy of different DL approaches, we employ four downscaling methods and evaluate their performance. The methods considered are (i) Deep Statistical Downscaling (DeepSD), (ii) augmented Convolutional Long Short-Term Memory (ConvLSTM), (iii) fully convolutional network (U-NET), and (iv) Super-Resolution Generative Adversarial Network (SR-GAN). The custom VGG network used in the SR-GAN was developed in this work using precipitation data. The results show that SR-GAN is the best method for downscaling precipitation data. The downscaled data are validated against precipitation values at IMD stations. This DL-based approach offers a promising alternative to statistical downscaling.
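For orientation, a generic SRCNN-style generator for gridded precipitation looks like the sketch below; the scale factor, layer sizes, and grid resolution are illustrative assumptions, and the paper's SR-GAN additionally trains such a generator adversarially with a custom VGG perceptual loss.

```python
import torch
import torch.nn as nn

class PrecipSRNet(nn.Module):
    """Minimal SRCNN-style generator: upsample a coarse precipitation grid 4x.
    A full SR-GAN would pair this with a discriminator and a perceptual loss."""
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2))

    def forward(self, coarse):           # coarse: (B, 1, H, W) precipitation grid
        return self.net(coarse)          # fine:   (B, 1, 4H, 4W)

model = PrecipSRNet()
fine = model(torch.rand(1, 1, 32, 32))   # e.g. a coarse grid upscaled 4x
print(fine.shape)                        # torch.Size([1, 1, 128, 128])
```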